8 research outputs found

    Towards an in-depth understanding of privacy parameters for randomized sanitization mechanisms

    Get PDF
    Differential privacy, and other close notions such as d_χ-privacy, is at the heart of the privacy framework when considering the use of randomization to ensure data privacy. Such a guarantee is always subject to a trade-off between the privacy level and the accuracy of the result. While the privacy parameter of differentially private algorithms controls this trade-off, choosing a meaningful value for this numerical parameter is often a hard task. Only a few works have tackled this issue, and the present paper's goal is to continue this effort in two ways. First, we propose a generic framework to decide whether a privacy parameter value is sufficient to prevent some pre-determined and well-understood privacy risks. Second, we instantiate our framework on mobility data from real-life datasets, and show some insightful features necessary for practical applications of randomized sanitization mechanisms. In our framework, we model scenarios where an attacker's goal is to de-sanitize data previously sanitized in the sense of d_χ-privacy, a privacy guarantee close to that of differential privacy. To each attack is associated a meaningful risk of data disclosure, and the level of success of the attack suggests a relevant value for the corresponding privacy parameter.
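The privacy-accuracy trade-off governed by the privacy parameter can be illustrated with the standard one-dimensional Laplace mechanism. This is a sketch for intuition only, not the paper's d_χ-privacy mechanisms for location data, which are more involved; the function name and parameters are illustrative.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity / epsilon.

    Smaller epsilon means stronger privacy but larger expected error.
    """
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is scale * (X - Y) with X, Y i.i.d. Exp(1).
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# The expected absolute error equals the scale: strict privacy is noisy.
random.seed(0)
strict = [abs(laplace_mechanism(100.0, 1.0, 0.1) - 100.0) for _ in range(1000)]
loose = [abs(laplace_mechanism(100.0, 1.0, 10.0) - 100.0) for _ in range(1000)]
print(sum(strict) / 1000, sum(loose) / 1000)  # roughly 10 vs roughly 0.1
```

This makes concrete why choosing epsilon is hard: the same mechanism yields answers off by about 10 at epsilon = 0.1 but by only about 0.1 at epsilon = 10, and nothing in the definition itself says which regime is "private enough" for a given risk.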

    Differentially private instance-based noise mechanisms in practice

    Get PDF
    Differential privacy is a widely used privacy model today, whose privacy guarantees are obtained at the price of a random perturbation of the result. In some situations, basic differentially private mechanisms may add too much noise for a reasonable level of privacy. To address this shortcoming, several works have provided more technically involved mechanisms, using a new paradigm of differentially private mechanisms called instance-based noise mechanisms. In this paper, we exhibit for the first time theoretical conditions for an instance-based noise mechanism to be (ε, δ)-differentially private. We exploit the simplicity of these conditions to design a novel instance-based noise differentially private mechanism. Conducting experimental evaluations, we show that our mechanism compares favorably to existing instance-based noise mechanisms, regarding either time complexity or accuracy of the sanitized result. In contrast with some prior works, our algorithms do not involve the computation of all local sensitivities, a computational task which was proved to be NP-hard in some cases, namely for statistical queries on graphs. Our framework is as general as possible and can be used to answer any query, in contrast with recent designs of instance-based noise mechanisms where only graph statistics queries are considered.
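As a baseline for comparison, the classical Gaussian mechanism below achieves (ε, δ)-differential privacy by calibrating noise to the *global* L2 sensitivity. This is only a sketch of the standard textbook construction, not the paper's instance-based mechanism, which replaces the global sensitivity with a data-dependent quantity to add less noise.

```python
import math
import random

def gaussian_sigma(l2_sensitivity, epsilon, delta):
    """Classical calibration: sigma >= sqrt(2 ln(1.25/delta)) * Delta_2 / epsilon.

    This bound is valid for 0 < epsilon < 1.
    """
    return l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def gaussian_mechanism(true_value, l2_sensitivity, epsilon, delta):
    # Global-sensitivity baseline: an instance-based mechanism would instead
    # scale the noise to a (typically smaller) sensitivity of the actual input.
    sigma = gaussian_sigma(l2_sensitivity, epsilon, delta)
    return true_value + random.gauss(0.0, sigma)

print(gaussian_sigma(1.0, 0.5, 1e-5))  # around 9.7
```

The printed value shows the cost of global calibration: even for unit sensitivity, a strict (0.5, 1e-5) guarantee requires noise with a standard deviation near 10, which is the kind of overhead that motivates instance-based designs.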

    Privacy in trajectory micro-data publishing: a survey

    Get PDF
    We survey the literature on the privacy of trajectory micro-data, i.e., spatiotemporal information about the mobility of individuals, whose collection is becoming increasingly simple and frequent thanks to emerging information and communication technologies. The focus of our review is on privacy-preserving data publishing (PPDP), i.e., the publication of databases of trajectory micro-data that preserve the privacy of the monitored individuals. We classify and present the literature of attacks against trajectory micro-data, as well as solutions proposed to date for protecting databases from such attacks. This paper serves as an introductory reading on a critical subject in an era of growing awareness about privacy risks connected to digital services, and provides insights into open problems and future directions for research. Comment: Accepted for publication at Transactions on Data Privacy

    Malware and Ransomware Detection Models

    Full text link
    Cybercrime is one of the major digital threats of this century. In particular, ransomware attacks have significantly increased, resulting in global damage costs of tens of billions of dollars. In this paper, we train and test different Machine Learning and Deep Learning models for malware detection, malware classification, and ransomware detection. We introduce a novel and flexible ransomware detection model that combines two optimized models. Our detection results on a limited dataset demonstrate good accuracy and F1 scores.
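Since the evaluation reports accuracy and F1 scores, a minimal reminder of how F1 is computed for a binary detector may help. The labels below are hypothetical and have nothing to do with the paper's dataset.

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class (1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2.0 * precision * recall / denom if denom else 0.0

# Toy example: 2 true positives, 1 false positive, 1 false negative.
print(round(f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]), 3))  # 0.667
```

Unlike plain accuracy, F1 is insensitive to the number of true negatives, which matters in malware detection where benign samples usually dominate the data.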